Ranking SVM

Ranking SVM is an application of the support vector machine (SVM), used to solve certain ranking problems. The algorithm was published by Thorsten Joachims in 2002.[1] The original purpose of Ranking SVM was to improve the performance of internet search engines; however, it was later found that Ranking SVM can also be used to solve other problems, such as Rank SIFT.[2]

Description

Ranking SVM is one of the pairwise ranking methods, used to adaptively sort web pages by their relevance to a specific query. A mapping function is required to define this relevance relationship; it projects each data pair (query and clicked web page) onto a feature space. These features, combined with the user's click-through data (which implies page ranks for a specific query), can be used as the training data for machine learning algorithms.

Generally, Ranking SVM includes three steps in the training period (a rough code sketch follows the list):

  1. It maps the similarities between queries and the clicked pages onto a certain feature space.
  2. It computes the pairwise difference vectors between the vectors obtained in step 1.
  3. It forms an optimization problem similar to SVM classification and solves this problem with a regular SVM solver.
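As a rough illustration of these steps, the following sketch uses made-up feature vectors and an off-the-shelf linear SVM; the toy data, the pair set, and the use of scikit-learn are assumptions for illustration, not part of the published algorithm.

    import numpy as np
    from sklearn.svm import LinearSVC

    # Step 1 (assumed toy data): feature vectors of (query, clicked page)
    # pairs for one query, listed from most to least relevant.
    phi = np.array([[0.9, 0.3], [0.7, 0.6], [0.2, 0.8]])

    # Step 2: difference vectors for every ordered pair (higher, lower).
    diffs = np.array([phi[i] - phi[j]
                      for i in range(len(phi))
                      for j in range(len(phi)) if i < j])

    # Step 3: solve the SVM-classification-like problem; pairing each
    # difference (+1) with its negation (-1) gives the solver two classes.
    X = np.vstack([diffs, -diffs])
    y = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])
    w = LinearSVC(fit_intercept=False).fit(X, y).coef_.ravel()
    print(phi @ w)  # higher score = higher predicted rank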

Background

Ranking Method

Suppose \mathbb{C} is a data set containing C elements c_i, and r is a ranking method applied to \mathbb{C}. Then r on \mathbb{C} can be represented as a C by C asymmetric binary matrix: if the rank of c_i is higher than the rank of c_j, i.e. c_i <_r c_j, the corresponding entry of the matrix is set to 1; otherwise that entry is set to 0.
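For instance, a ranking of three elements can be encoded as such a matrix in a few lines of code (the rank vector below is a made-up example; rank 1 is the highest):

    import numpy as np

    # ranks[i] is the rank of element c_i (1 = highest); entry (i, j) of the
    # matrix is 1 exactly when c_i is ranked higher than c_j.
    ranks = np.array([2, 1, 3])
    M = (ranks[:, None] < ranks[None, :]).astype(int)
    print(M)
    # [[0 0 1]
    #  [1 0 1]
    #  [0 0 0]]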

Kendall’s Tau [3][4]

Kendall's tau, also known as the Kendall tau rank correlation coefficient, is commonly used to compare two ranking methods applied to the same data set.

Suppose r_1 and r_2 are two ranking methods applied to the data set \mathbb{C}. The Kendall's tau between r_1 and r_2 can be represented as follows:

 

\tau (r_1,r_2) = {P-Q \over P+Q} = 1 - {2Q \over P+Q}

where P is the number of positions at which the upper triangular parts of the matrices of r_1 and r_2 agree (concordant pairs), and Q is the number of positions at which they differ (discordant pairs). The diagonals of the matrices are not included in the upper triangular part stated above.
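A small self-contained example, counting P and Q directly on the upper triangles of the two matrices (the rankings are made up):

    import numpy as np

    def ranking_matrix(ranks):
        r = np.asarray(ranks)
        return (r[:, None] < r[None, :]).astype(int)

    M1 = ranking_matrix([1, 2, 3, 4])
    M2 = ranking_matrix([2, 1, 3, 4])  # first two elements swapped

    iu = np.triu_indices(4, k=1)       # upper triangle, diagonal excluded
    agree = M1[iu] == M2[iu]
    P, Q = int(agree.sum()), int((~agree).sum())
    print((P - Q) / (P + Q))           # 0.666...: one discordant pair of six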

Information Retrieval Quality [5][6][7]

Information retrieval quality is usually evaluated by the following three measurements:

  1. Precision
  2. Recall
  3. Average Precision

For a specific query to a database, let P_{relevant} be the set of relevant information elements in the database and P_{retrieved} be the set of retrieved information elements. Then the above three measurements can be represented as follows:

 \begin{array}{lcl}
Precision = {\left \vert P_{relevant} \cap P_{retrieved} \right \vert \over \left \vert P_{retrieved} \right \vert};\\
\\
Recall = {\left \vert P_{relevant} \cap P_{retrieved} \right \vert \over \left \vert P_{relevant} \right \vert};\\
\\
AveragePrecision = \int_0^1 {Prec(Recall)}\,dRecall,
\end{array}

where Prec(Recall) is the precision as a function of recall.
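As an illustration on made-up sets, with the usual discrete approximation of the average-precision integral (mean precision at the ranks where a relevant element is retrieved):

    relevant = {1, 3, 5, 7}
    retrieved = [1, 2, 3, 4, 5]  # in ranked order

    hits = relevant & set(retrieved)
    precision = len(hits) / len(retrieved)  # 3/5
    recall = len(hits) / len(relevant)      # 3/4

    # Mean of precision@k over the ranks k at which a relevant element appears.
    average_precision = sum(
        len(relevant & set(retrieved[:k + 1])) / (k + 1)
        for k, doc in enumerate(retrieved) if doc in relevant
    ) / len(relevant)
    print(precision, recall, average_precision)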

Let r^* and r_{f(q)} be the expected and proposed ranking methods of a database, respectively. The lower bound on the average precision of method r_{f(q)} can be represented as follows:

 AvgPrec(r_{f(q)}) \geqq {1 \over R} \left[ Q + \binom{R+1}{2} \right ]^{-1} \left( \sum_{i=1}^R \sqrt{i} \right)^2

where Q is the number of discordant positions in the upper triangular parts of the matrices of r^* and r_{f(q)}, and R is the number of relevant elements in the data set.
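The bound is easy to evaluate numerically; the function below is a direct transcription of the formula above (the function name is ours):

    from math import comb, sqrt

    def avgprec_lower_bound(Q, R):
        # (1/R) * [Q + C(R+1, 2)]^(-1) * (sum_{i=1..R} sqrt(i))^2
        s = sum(sqrt(i) for i in range(1, R + 1))
        return s ** 2 / (R * (Q + comb(R + 1, 2)))

    print(avgprec_lower_bound(0, 3))  # perfect ranking of 3 relevant items
    print(avgprec_lower_bound(2, 3))  # discordant pairs lower the bound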

SVM Classifier [8]

Suppose (\vec x_i,y_i) is an element of a training data set, where \vec x_i is the feature vector (with information about features) and y_i is the label (which classifies the category of \vec x_i). A typical SVM classifier for such a data set can be defined as the solution of the following optimization problem:


\begin{array}{lcl}
minimize:\ V(\vec w, \vec \xi) = {1 \over 2} \vec w \cdot \vec w + C \sum_i{\xi_i^\sigma} \\
s.t.\\
\quad \sigma \geqq 0;\\
\quad \forall i:\ y_i(\vec w \cdot \vec x_i + b) \geqq 1 - \xi_i^\sigma;\\
where\\
\quad b\ is\ a\ scalar\ and\ C\ is\ a\ constant;\\
\quad \forall y_i \in \left \{ -1,1 \right \};\\
\quad \forall \xi_i \geqq 0.
\end{array}

The solution of the above optimization problem can be represented as a linear combination of the feature vectors \vec x_i:


 \vec w^* = \sum_i{\alpha_i y_i \vec x_i}

where \alpha_i are the coefficients to be determined.
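For a linear kernel this relation can be checked with scikit-learn, whose dual_coef_ attribute stores \alpha_i y_i for the support vectors (the toy data below are made up):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [3.0, 0.5]])
    y = np.array([-1, -1, 1, 1])
    clf = SVC(kernel="linear", C=1.0).fit(X, y)

    # dual_coef_ holds alpha_i * y_i for the support vectors, so
    # w* = sum_i alpha_i y_i x_i is recovered as:
    w_star = clf.dual_coef_ @ clf.support_vectors_
    print(np.allclose(w_star, clf.coef_))  # True: the same linear classifier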

Ranking SVM algorithm

Loss Function

Let \tau_{P(f)} be the Kendall's tau between the expected ranking method r^* and the proposed method r_{f(q)}. It can be proved that maximizing \tau_{P(f)} maximizes the lower bound of the average precision of r_{f(q)}.

The negative \tau_{P(f)} can therefore be selected as the loss function, whose minimization maximizes the lower bound of the average precision of r_{f(q)}:

 L_{expected} = -\tau_{P(f)} = -\int \tau(r_{f(q)},r^*)\,d\Pr(q,r^*)

where \Pr(q,r^*) is the statistical distribution of the expected ranking r^* for a given query q.

Since the expected loss function cannot be evaluated directly (the distribution \Pr(q,r^*) is unknown), the following empirical loss over the training data is used in practice:

 L_{empirical} = - \tau_S(f)= -{1 \over n} \sum_{i=1}^n{\tau(r_{f(q_i)},r_i^*)}
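For example, the empirical loss can be estimated with an off-the-shelf Kendall's tau implementation (the rankings below are made up):

    import numpy as np
    from scipy.stats import kendalltau

    # For each of n = 2 queries: the expected ranking r_i^* and the ranking
    # produced by f(q_i), given as ranks of the same documents.
    expected = [[1, 2, 3, 4], [1, 2, 3, 4]]
    proposed = [[2, 1, 3, 4], [1, 2, 4, 3]]

    # Empirical loss: negative mean Kendall's tau over the n queries.
    L_empirical = -np.mean([kendalltau(r_star, r_f)[0]
                            for r_star, r_f in zip(expected, proposed)])
    print(L_empirical)  # -0.666...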

Collecting training data

n i.i.d. queries are applied to the database, and each query corresponds to one ranking method. The training data set therefore has n elements, each containing a query and its corresponding ranking method.

Feature Space

A mapping function \Phi(q,d)[10][11] is required to map each query and each element of the database to a feature space. Each point in the feature space is then labelled with a certain rank by the ranking method.

Optimization problem

The points generated from the training data lie in the feature space and carry the rank information (the labels). These labelled points can be used to find the boundary (classifier) that specifies their order. In the linear case, such a boundary (classifier) is a vector.

Suppose c_i and c_j are two elements of the database, and denote by (c_i,c_j) \in r that the rank of c_i is higher than that of c_j under ranking method r. Let the vector \vec w be a linear classifier candidate in the feature space. Then the ranking problem can be translated into the following SVM classification problem. Note that one ranking method corresponds to one query.



\begin{array}{lcl}
minimize:\ V(\vec w, \vec \xi) = {1 \over 2} \vec w \cdot \vec w + C \sum{\xi_{i,j,k}} \\
s.t.\\
\quad \forall \xi_{i,j,k} \geqq 0;\\
\quad \forall (c_i, c_j) \in r_k^*:\\
\quad \vec w \cdot (\Phi(q_1,c_i)-\Phi(q_1,c_j)) \geqq 1- \xi_{i,j,1};\\
\quad \cdots\\
\quad \vec w \cdot (\Phi(q_n,c_i)-\Phi(q_n,c_j)) \geqq 1- \xi_{i,j,n};\\
where\ k \in \left \{ 1,2,\ldots,n \right \},\ i,j \in \left \{ 1,2,\ldots \right \}.
\end{array}

Treating each difference vector \Phi(q_k,c_i) - \Phi(q_k,c_j) as a single training example with label +1 makes the above optimization problem identical to the classical SVM classification problem, which is why this algorithm is called Ranking SVM.
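Explicitly, writing \vec z_{i,j,k} for the difference vector (this shorthand is ours, not from the original paper):

 \vec z_{i,j,k} = \Phi(q_k,c_i) - \Phi(q_k,c_j), \quad y_{i,j,k} = +1 \ \Rightarrow \ y_{i,j,k}\,(\vec w \cdot \vec z_{i,j,k}) \geqq 1 - \xi_{i,j,k},

which is exactly the SVM classification constraint with b = 0.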

Retrieval Function

The optimal vector \vec w^* obtained from the training sample is

 \vec w^* = \sum{\alpha_{k,l}^* \Phi(q_k,c_l)}

So the retrieval function can be formed from this optimal classifier.
For a new query q, the retrieval function first projects all elements of the database onto the feature space. It then orders these feature points by the values of their inner products with the optimal vector, and the rank of each feature point is the rank of the corresponding database element for the query q.
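A minimal sketch of such a retrieval function, assuming a learned vector w and hypothetical feature vectors \Phi(q,c_i) for the new query (all values made up):

    import numpy as np

    def retrieve(w, phi_new):
        # phi_new[i] is the feature vector Phi(q, c_i) of database element
        # c_i for the new query q; rank by inner product with w.
        scores = phi_new @ w
        return np.argsort(-scores)  # element indices, best match first

    w = np.array([0.7, -0.2])
    phi_new = np.array([[0.1, 0.9], [0.8, 0.1], [0.5, 0.5]])
    print(retrieve(w, phi_new))  # [1 2 0]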

Ranking SVM application

Ranking SVM can be applied to rank pages according to a query. Training the algorithm requires click-through data, which consists of three parts:

  1. The query.
  2. The ranking of pages as presented to the user, produced by the previously trained retrieval function.
  3. The pages the user clicked on.

The combination of parts 2 and 3 cannot provide the full ordering of the training data that the full SVM algorithm needs. Instead, it provides only part of the ranking information of the training data, so the algorithm is slightly revised as follows.


\begin{array}{lcl}
minimize:\ V(\vec w, \vec \xi) = {1 \over 2} \vec w \cdot \vec w + C \sum{\xi_{i,j,k}} \\
s.t.\\
\quad \forall \xi_{i,j,k} \geqq 0;\\
\quad \forall (c_i, c_j) \in r_k';\\
\quad \vec w \cdot (\Phi(q_1,c_i)-\Phi(q_1,c_j)) \geqq 1- \xi_{i,j,1};\\
\quad \cdots\\
\quad \vec w \cdot (\Phi(q_n,c_i)-\Phi(q_n,c_j)) \geqq 1- \xi_{i,j,n};\\
where\ k \in \left \{ 1,2,\ldots,n \right \},\ i,j \in \left \{ 1,2,\ldots \right \}.
\end{array}

The method r' does not provide ranking information for the whole data set; it is a subset of the full ranking method. Consequently, the constraints of the optimization problem are more relaxed than those of the original Ranking SVM.
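One common heuristic for extracting such partial pairs from click-through data is that a clicked page is preferred over every non-clicked page ranked above it; a minimal sketch (the function name and the data are ours):

    def preference_pairs(ranking, clicked):
        # ranking: page ids in the order shown; clicked: set of clicked ids.
        # Returns pairs (c_i, c_j) meaning c_i should outrank c_j.
        pairs = []
        for pos, page in enumerate(ranking):
            if page in clicked:
                # Pages shown above this click but skipped are judged worse.
                pairs.extend((page, above) for above in ranking[:pos]
                             if above not in clicked)
        return pairs

    # Made-up session: pages 1..5 shown in this order; user clicked 3 and 5.
    print(preference_pairs([1, 2, 3, 4, 5], {3, 5}))
    # [(3, 1), (3, 2), (5, 1), (5, 2), (5, 4)]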

References

  1. ^ Joachims, T. (2002). "Optimizing Search Engines using Clickthrough Data". Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD).
  2. ^ Bing Li; Rong Xiao; Zhiwei Li; Rui Cai; Bao-Liang Lu; Lei Zhang. "Rank-SIFT: Learning to rank repeatable local interest points". Computer Vision and Pattern Recognition (CVPR), 2011.
  3. ^ M. Kendall. Rank Correlation Methods. Hafner, 1955.
  4. ^ A. Mood, F. Graybill, and D. Boes. Introduction to the Theory of Statistics. McGraw-Hill, 3rd edition, 1974.
  5. ^ J. Kemeny and L. Snell. Mathematical Models in the Social Sciences. Ginn & Co., 1962.
  6. ^ Y. Yao. Measuring retrieval effectiveness based on user preference of documents. Journal of the American Society for Information Science, 46(2): 133-145, 1995.
  7. ^ R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley-Longman, Harlow, UK, May 1999.
  8. ^ C. Cortes and V. N. Vapnik. Support-vector networks. Machine Learning, 20: 273-297, 1995.
  9. ^ V. Vapnik. Statistical Learning Theory. Wiley, Chichester, GB, 1998.
  10. ^ N. Fuhr. Optimum polynomial retrieval functions based on the probability ranking principle. ACM Transactions on Information Systems, 7(3): 183-204, 1989.
  11. ^ N. Fuhr, S. Hartmann, G. Lustig, M. Schwantner, K. Tzeras, and G. Knorz. AIR/X - a rule-based multistage indexing system for large subject fields. In RIAO, 1991.